Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.


In [1]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('cifar-10-python.tar.gz'):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)


All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.


In [2]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)


Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 10:
Image - Min Value: 24 Max Value: 130
Image - Shape: (32, 32, 3)
Label - Label Id: 4 Name: deer
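
A quick way to answer those questions yourself is to load one raw batch file directly. The sketch below is an optional aside, not one of the project cells; it assumes the batch files sit in cifar10_dataset_folder_path, and passes encoding='latin1' because the batches are Python 2 pickles.

import pickle

with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as f:
    batch = pickle.load(f, encoding='latin1')

images = batch['data']     # numpy array, shape (10000, 3072), dtype uint8
labels = batch['labels']   # list of 10000 integers in 0-9

print('Pixel range:', images.min(), '-', images.max())   # 0 - 255
print('Distinct labels:', sorted(set(labels)))            # [0, 1, ..., 9]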

Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.


In [3]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    # TODO: Implement Function
    # Scale the batch to [0, 1] using its minimum and maximum values
    maximum = np.max(x)
    minimum = np.min(x)
    return (x - minimum) / (maximum - minimum)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)


Tests Passed
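
Because CIFAR-10 pixels are 8-bit values, an equally valid (and batch-independent) approach is to divide by the fixed pixel range instead of the batch minimum and maximum. A minimal sketch of that alternative, not the implementation used above:

def normalize_fixed(x):
    # Pixel values are known to lie in 0-255, so a fixed divisor gives the
    # same mapping for every batch
    return np.array(x) / 255.0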

One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.


In [4]:
from sklearn import preprocessing
labels = np.array([0,1,2,3,4,5,6,7,8,9])
one_hot = preprocessing.LabelBinarizer()
one_hot.fit(labels)

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    
    # Reuse the LabelBinarizer fitted on labels 0-9 above, so every call
    # returns the same encoding for the same label
    return one_hot.transform(x)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)


Tests Passed
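
If you'd rather avoid the scikit-learn dependency, the same encoding can be produced with plain NumPy. A minimal sketch (again, not the implementation used above):

def one_hot_encode_np(x):
    # Row i of the 10x10 identity matrix is the one-hot vector for label i,
    # so indexing with the label array encodes the whole batch at once
    return np.eye(10)[np.array(x)]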

Randomize Data

As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.


In [5]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.


In [6]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
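
To make the distinction concrete, here is a rough sketch of the same fully connected layer written both ways. This is illustration only; the placeholder shape and layer width are arbitrary, and both calls are standard TensorFlow 1.x.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128])

# Shortcut option: TF Layers creates and tracks the weight and bias variables
shortcut_out = tf.layers.dense(x, 64, activation=tf.nn.relu)

# Lower-level option: create the variables and wire up the ops yourself
weights = tf.Variable(tf.truncated_normal([128, 64], stddev=0.05))
biases = tf.Variable(tf.zeros([64]))
manual_out = tf.nn.relu(tf.add(tf.matmul(x, weights), biases))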

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.


In [7]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function

    x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name='x')
    
    return x 


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    
    y = tf.placeholder(tf.float32, [None, n_classes], name='y')
    
    return y 


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    
    return keep_prob


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)


Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any padding.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
    • We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.


In [8]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    

    # Filter shape is [filter_height, filter_width, input_depth, output_depth],
    # so only the input depth is needed from x_tensor
    input_depth = x_tensor.get_shape().as_list()[3]

    # Small random initial weights (truncated normal, stddev 0.05) and biases
    weights = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs], mean=0.0, stddev=0.05))

    biases = tf.Variable(tf.random_normal([conv_num_outputs]))
    
    def conv2d(x, W, b, strides=[1,1]):
        x = tf.nn.conv2d(x, W, strides=[1, strides[0], strides[1], 1], padding='SAME')
        x = tf.nn.bias_add(x, b)
        return tf.nn.relu(x)
    
    def maxpool2d(x, k=[2,2], s=[2,2]):
        return tf.nn.max_pool(
            x,
            ksize=[1, k[0], k[1], 1],
            strides=[1, s[0], s[1], 1],
            padding='SAME')

    conv = conv2d(x_tensor, weights, biases, strides=conv_strides)
    conv2dmax = maxpool2d(conv, k=pool_ksize, s=pool_strides)
    
    return conv2dmax 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)


Tests Passed
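
As a sanity check on the shape arithmetic: with 'SAME' padding, the spatial size after a convolution or pooling op is the input size divided by the stride, rounded up. A short sketch for the hyperparameters used later in conv_net (32x32x3 input, 5x5 convolution with stride 1 and 16 outputs, 2x2 max pooling with stride 2):

import math

height = width = 32
height = math.ceil(height / 1)   # convolution, stride 1 -> 32
width = math.ceil(width / 1)     #                       -> 32
height = math.ceil(height / 2)   # max pool, stride 2    -> 16
width = math.ceil(width / 2)     #                       -> 16
print(height, width, 16)         # 16 16 16, so the flattened size is 16*16*16 = 4096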

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [9]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    
    # Flatten everything except the batch dimension
    x_shape = x_tensor.get_shape().as_list()
    xh, xw, xd = x_shape[1], x_shape[2], x_shape[3]

    flat = tf.reshape(x_tensor, [-1, xh * xw * xd])
    
    return flat


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)


Tests Passed
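
For reference, the shortcut option for this layer is a one-liner. A sketch, assuming tf.contrib.layers is available in your TensorFlow 1.x install:

def flatten_shortcut(x_tensor):
    # TF Layers (contrib) works out the flattened size from the tensor shape
    return tf.contrib.layers.flatten(x_tensor)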

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.


In [10]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    
    # Width of the incoming 2-D tensor, used to size the weight matrix
    xl = x_tensor.get_shape().as_list()[1]

    # Small random initial weights and biases
    weights = tf.Variable(tf.truncated_normal([xl, num_outputs], mean=0.0, stddev=0.05))
    biases = tf.Variable(tf.random_normal([num_outputs]))
    
    fc = tf.add(tf.matmul(x_tensor, weights), biases)
    fc = tf.nn.relu(fc)
    
    return fc


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)


Tests Passed

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.


In [12]:
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    
    # Width of the incoming 2-D tensor, used to size the weight matrix
    xl = x_tensor.get_shape().as_list()[1]

    # Small random initial weights and biases; no activation on the output layer
    weights = tf.Variable(tf.truncated_normal([xl, num_outputs], mean=0.0, stddev=0.05))
    biases = tf.Variable(tf.random_normal([num_outputs]))
    
    out = tf.add(tf.matmul(x_tensor, weights), biases)
    
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)


Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.

In [17]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    
    # Convolution and max pooling hyperparameters
    
    conv_num_outputs = 16
    conv_ksize = [5,5]
    conv_strides = [1,1]
    pool_ksize = [2,2]
    pool_strides = [2,2]
    
    conv2dmax1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    
    flat1 = flatten(conv2dmax1)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    
    # Fully connected layer size
    
    fc1_outputs = 1024
    
    fc1 = fully_conn(flat1, fc1_outputs)
    
    
    # Apply dropout between the fully connected layer and the output layer
    
    drop1 = tf.nn.dropout(fc1, keep_prob)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    
    output_classes = 10
    
    out = output(drop1, output_classes)
    
    # TODO: return output
    
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)


Neural Network Built!
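
It's worth knowing where this model's parameters live. A rough count for the hyperparameters set in conv_net above (5x5 convolution with 16 outputs, 2x2 pooling, one 1024-unit fully connected layer, 10 output classes):

conv_params = 5 * 5 * 3 * 16 + 16        # 1,216
flat_size = 16 * 16 * 16                 # 4,096 values after pooling
fc_params = flat_size * 1024 + 1024      # 4,195,328
out_params = 1024 * 10 + 10              # 10,250
print(conv_params + fc_params + out_params)  # about 4.2 million, almost all in the fully connected layer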

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.


In [19]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    
    
    session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)


Tests Passed

Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.


In [28]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    
    loss = session.run(cost, feed_dict = {x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0})
    
    print('Loss: {:>6.4f} Validation Accuracy: {:.6f}'.format(
                loss,
                valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout

In [30]:
# TODO: Tune Parameters - Hyperparameters
epochs = 50
batch_size = 128
keep_probability = 0.50

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.


In [31]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)


Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1710 Validation Accuracy: 0.277000
Epoch  2, CIFAR-10 Batch 1:  Loss: 2.0535 Validation Accuracy: 0.341800
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.8748 Validation Accuracy: 0.384000
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.7377 Validation Accuracy: 0.416600
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.6144 Validation Accuracy: 0.440000
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.4437 Validation Accuracy: 0.456000
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.2985 Validation Accuracy: 0.466600
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.1040 Validation Accuracy: 0.481400
Epoch  9, CIFAR-10 Batch 1:  Loss: 0.9773 Validation Accuracy: 0.486600
Epoch 10, CIFAR-10 Batch 1:  Loss: 0.8861 Validation Accuracy: 0.508000
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.7498 Validation Accuracy: 0.493800
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.6785 Validation Accuracy: 0.513400
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.6256 Validation Accuracy: 0.516000
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.5512 Validation Accuracy: 0.518800
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.5213 Validation Accuracy: 0.526600
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.4292 Validation Accuracy: 0.517200
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.3924 Validation Accuracy: 0.534800
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.3462 Validation Accuracy: 0.532400
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.3184 Validation Accuracy: 0.527400
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.3094 Validation Accuracy: 0.526800
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.2511 Validation Accuracy: 0.530600
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.2036 Validation Accuracy: 0.526800
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.2000 Validation Accuracy: 0.527600
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.1434 Validation Accuracy: 0.535000
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.1428 Validation Accuracy: 0.534200
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.1235 Validation Accuracy: 0.537400
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.1223 Validation Accuracy: 0.535400
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.0840 Validation Accuracy: 0.539200
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.0789 Validation Accuracy: 0.545400
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.0684 Validation Accuracy: 0.544000
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.0559 Validation Accuracy: 0.542800
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.0497 Validation Accuracy: 0.548600
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.0445 Validation Accuracy: 0.537000
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.0618 Validation Accuracy: 0.532800
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.0371 Validation Accuracy: 0.546200
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.0329 Validation Accuracy: 0.543200
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.0221 Validation Accuracy: 0.547800
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.0277 Validation Accuracy: 0.547600
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.0241 Validation Accuracy: 0.551000
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.0213 Validation Accuracy: 0.550000
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.0176 Validation Accuracy: 0.549000
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.0168 Validation Accuracy: 0.541200
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.0151 Validation Accuracy: 0.543400
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.0134 Validation Accuracy: 0.545200
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.0115 Validation Accuracy: 0.544200
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.0113 Validation Accuracy: 0.539000
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.0075 Validation Accuracy: 0.539800
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.0050 Validation Accuracy: 0.546800
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.0085 Validation Accuracy: 0.543800
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.0039 Validation Accuracy: 0.547800

Fully Train the Model

Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.


In [32]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...
Epoch  1, CIFAR-10 Batch 1:  Loss: 2.1682 Validation Accuracy: 0.245200
Epoch  1, CIFAR-10 Batch 2:  Loss: 1.9692 Validation Accuracy: 0.305800
Epoch  1, CIFAR-10 Batch 3:  Loss: 1.6953 Validation Accuracy: 0.334600
Epoch  1, CIFAR-10 Batch 4:  Loss: 1.6757 Validation Accuracy: 0.368800
Epoch  1, CIFAR-10 Batch 5:  Loss: 1.8138 Validation Accuracy: 0.381200
Epoch  2, CIFAR-10 Batch 1:  Loss: 1.9366 Validation Accuracy: 0.401600
Epoch  2, CIFAR-10 Batch 2:  Loss: 1.7074 Validation Accuracy: 0.421200
Epoch  2, CIFAR-10 Batch 3:  Loss: 1.4385 Validation Accuracy: 0.417400
Epoch  2, CIFAR-10 Batch 4:  Loss: 1.3967 Validation Accuracy: 0.437400
Epoch  2, CIFAR-10 Batch 5:  Loss: 1.5639 Validation Accuracy: 0.461600
Epoch  3, CIFAR-10 Batch 1:  Loss: 1.7150 Validation Accuracy: 0.472000
Epoch  3, CIFAR-10 Batch 2:  Loss: 1.3321 Validation Accuracy: 0.481200
Epoch  3, CIFAR-10 Batch 3:  Loss: 1.2352 Validation Accuracy: 0.479600
Epoch  3, CIFAR-10 Batch 4:  Loss: 1.1838 Validation Accuracy: 0.487400
Epoch  3, CIFAR-10 Batch 5:  Loss: 1.4177 Validation Accuracy: 0.500600
Epoch  4, CIFAR-10 Batch 1:  Loss: 1.4858 Validation Accuracy: 0.506800
Epoch  4, CIFAR-10 Batch 2:  Loss: 1.2226 Validation Accuracy: 0.518400
Epoch  4, CIFAR-10 Batch 3:  Loss: 1.0714 Validation Accuracy: 0.506800
Epoch  4, CIFAR-10 Batch 4:  Loss: 1.0443 Validation Accuracy: 0.522200
Epoch  4, CIFAR-10 Batch 5:  Loss: 1.3094 Validation Accuracy: 0.529000
Epoch  5, CIFAR-10 Batch 1:  Loss: 1.3084 Validation Accuracy: 0.525800
Epoch  5, CIFAR-10 Batch 2:  Loss: 1.0858 Validation Accuracy: 0.534200
Epoch  5, CIFAR-10 Batch 3:  Loss: 0.9684 Validation Accuracy: 0.527200
Epoch  5, CIFAR-10 Batch 4:  Loss: 0.9636 Validation Accuracy: 0.542800
Epoch  5, CIFAR-10 Batch 5:  Loss: 1.2307 Validation Accuracy: 0.540200
Epoch  6, CIFAR-10 Batch 1:  Loss: 1.2166 Validation Accuracy: 0.552000
Epoch  6, CIFAR-10 Batch 2:  Loss: 1.0311 Validation Accuracy: 0.548600
Epoch  6, CIFAR-10 Batch 3:  Loss: 0.8477 Validation Accuracy: 0.546000
Epoch  6, CIFAR-10 Batch 4:  Loss: 0.9093 Validation Accuracy: 0.562600
Epoch  6, CIFAR-10 Batch 5:  Loss: 1.1450 Validation Accuracy: 0.556200
Epoch  7, CIFAR-10 Batch 1:  Loss: 1.1700 Validation Accuracy: 0.553000
Epoch  7, CIFAR-10 Batch 2:  Loss: 0.9337 Validation Accuracy: 0.566400
Epoch  7, CIFAR-10 Batch 3:  Loss: 0.7717 Validation Accuracy: 0.558000
Epoch  7, CIFAR-10 Batch 4:  Loss: 0.8488 Validation Accuracy: 0.575800
Epoch  7, CIFAR-10 Batch 5:  Loss: 0.9584 Validation Accuracy: 0.577000
Epoch  8, CIFAR-10 Batch 1:  Loss: 1.0283 Validation Accuracy: 0.572200
Epoch  8, CIFAR-10 Batch 2:  Loss: 0.8304 Validation Accuracy: 0.574400
Epoch  8, CIFAR-10 Batch 3:  Loss: 0.6585 Validation Accuracy: 0.570200
Epoch  8, CIFAR-10 Batch 4:  Loss: 0.7723 Validation Accuracy: 0.584800
Epoch  8, CIFAR-10 Batch 5:  Loss: 0.8732 Validation Accuracy: 0.588400
Epoch  9, CIFAR-10 Batch 1:  Loss: 0.8746 Validation Accuracy: 0.585400
Epoch  9, CIFAR-10 Batch 2:  Loss: 0.7274 Validation Accuracy: 0.593800
Epoch  9, CIFAR-10 Batch 3:  Loss: 0.6241 Validation Accuracy: 0.596800
Epoch  9, CIFAR-10 Batch 4:  Loss: 0.7426 Validation Accuracy: 0.597400
Epoch  9, CIFAR-10 Batch 5:  Loss: 0.7775 Validation Accuracy: 0.602000
Epoch 10, CIFAR-10 Batch 1:  Loss: 0.7777 Validation Accuracy: 0.599600
Epoch 10, CIFAR-10 Batch 2:  Loss: 0.6317 Validation Accuracy: 0.604400
Epoch 10, CIFAR-10 Batch 3:  Loss: 0.6059 Validation Accuracy: 0.608200
Epoch 10, CIFAR-10 Batch 4:  Loss: 0.6277 Validation Accuracy: 0.608800
Epoch 10, CIFAR-10 Batch 5:  Loss: 0.6788 Validation Accuracy: 0.604400
Epoch 11, CIFAR-10 Batch 1:  Loss: 0.6336 Validation Accuracy: 0.614600
Epoch 11, CIFAR-10 Batch 2:  Loss: 0.5281 Validation Accuracy: 0.606800
Epoch 11, CIFAR-10 Batch 3:  Loss: 0.4884 Validation Accuracy: 0.610400
Epoch 11, CIFAR-10 Batch 4:  Loss: 0.5520 Validation Accuracy: 0.616600
Epoch 11, CIFAR-10 Batch 5:  Loss: 0.6040 Validation Accuracy: 0.612200
Epoch 12, CIFAR-10 Batch 1:  Loss: 0.5412 Validation Accuracy: 0.606200
Epoch 12, CIFAR-10 Batch 2:  Loss: 0.4745 Validation Accuracy: 0.620600
Epoch 12, CIFAR-10 Batch 3:  Loss: 0.4478 Validation Accuracy: 0.617600
Epoch 12, CIFAR-10 Batch 4:  Loss: 0.4991 Validation Accuracy: 0.619000
Epoch 12, CIFAR-10 Batch 5:  Loss: 0.5203 Validation Accuracy: 0.621000
Epoch 13, CIFAR-10 Batch 1:  Loss: 0.4307 Validation Accuracy: 0.628400
Epoch 13, CIFAR-10 Batch 2:  Loss: 0.4541 Validation Accuracy: 0.619800
Epoch 13, CIFAR-10 Batch 3:  Loss: 0.3769 Validation Accuracy: 0.626600
Epoch 13, CIFAR-10 Batch 4:  Loss: 0.4390 Validation Accuracy: 0.626000
Epoch 13, CIFAR-10 Batch 5:  Loss: 0.4513 Validation Accuracy: 0.636600
Epoch 14, CIFAR-10 Batch 1:  Loss: 0.3800 Validation Accuracy: 0.627000
Epoch 14, CIFAR-10 Batch 2:  Loss: 0.3545 Validation Accuracy: 0.634200
Epoch 14, CIFAR-10 Batch 3:  Loss: 0.3682 Validation Accuracy: 0.637400
Epoch 14, CIFAR-10 Batch 4:  Loss: 0.3849 Validation Accuracy: 0.629000
Epoch 14, CIFAR-10 Batch 5:  Loss: 0.4210 Validation Accuracy: 0.626000
Epoch 15, CIFAR-10 Batch 1:  Loss: 0.3436 Validation Accuracy: 0.630200
Epoch 15, CIFAR-10 Batch 2:  Loss: 0.3076 Validation Accuracy: 0.623800
Epoch 15, CIFAR-10 Batch 3:  Loss: 0.3246 Validation Accuracy: 0.639600
Epoch 15, CIFAR-10 Batch 4:  Loss: 0.3511 Validation Accuracy: 0.633800
Epoch 15, CIFAR-10 Batch 5:  Loss: 0.3204 Validation Accuracy: 0.633400
Epoch 16, CIFAR-10 Batch 1:  Loss: 0.2795 Validation Accuracy: 0.639200
Epoch 16, CIFAR-10 Batch 2:  Loss: 0.2586 Validation Accuracy: 0.632000
Epoch 16, CIFAR-10 Batch 3:  Loss: 0.2778 Validation Accuracy: 0.643200
Epoch 16, CIFAR-10 Batch 4:  Loss: 0.2845 Validation Accuracy: 0.627800
Epoch 16, CIFAR-10 Batch 5:  Loss: 0.3246 Validation Accuracy: 0.634800
Epoch 17, CIFAR-10 Batch 1:  Loss: 0.2444 Validation Accuracy: 0.647600
Epoch 17, CIFAR-10 Batch 2:  Loss: 0.2289 Validation Accuracy: 0.631000
Epoch 17, CIFAR-10 Batch 3:  Loss: 0.2586 Validation Accuracy: 0.636000
Epoch 17, CIFAR-10 Batch 4:  Loss: 0.2097 Validation Accuracy: 0.642000
Epoch 17, CIFAR-10 Batch 5:  Loss: 0.2146 Validation Accuracy: 0.640000
Epoch 18, CIFAR-10 Batch 1:  Loss: 0.2084 Validation Accuracy: 0.634200
Epoch 18, CIFAR-10 Batch 2:  Loss: 0.2228 Validation Accuracy: 0.642400
Epoch 18, CIFAR-10 Batch 3:  Loss: 0.1891 Validation Accuracy: 0.644400
Epoch 18, CIFAR-10 Batch 4:  Loss: 0.1891 Validation Accuracy: 0.634400
Epoch 18, CIFAR-10 Batch 5:  Loss: 0.1816 Validation Accuracy: 0.640400
Epoch 19, CIFAR-10 Batch 1:  Loss: 0.1792 Validation Accuracy: 0.645600
Epoch 19, CIFAR-10 Batch 2:  Loss: 0.1646 Validation Accuracy: 0.645600
Epoch 19, CIFAR-10 Batch 3:  Loss: 0.2064 Validation Accuracy: 0.634200
Epoch 19, CIFAR-10 Batch 4:  Loss: 0.1571 Validation Accuracy: 0.643200
Epoch 19, CIFAR-10 Batch 5:  Loss: 0.1527 Validation Accuracy: 0.647000
Epoch 20, CIFAR-10 Batch 1:  Loss: 0.1492 Validation Accuracy: 0.650000
Epoch 20, CIFAR-10 Batch 2:  Loss: 0.1321 Validation Accuracy: 0.642600
Epoch 20, CIFAR-10 Batch 3:  Loss: 0.1491 Validation Accuracy: 0.649800
Epoch 20, CIFAR-10 Batch 4:  Loss: 0.1488 Validation Accuracy: 0.638600
Epoch 20, CIFAR-10 Batch 5:  Loss: 0.1462 Validation Accuracy: 0.640800
Epoch 21, CIFAR-10 Batch 1:  Loss: 0.1220 Validation Accuracy: 0.649600
Epoch 21, CIFAR-10 Batch 2:  Loss: 0.1075 Validation Accuracy: 0.646200
Epoch 21, CIFAR-10 Batch 3:  Loss: 0.1630 Validation Accuracy: 0.643800
Epoch 21, CIFAR-10 Batch 4:  Loss: 0.1286 Validation Accuracy: 0.645000
Epoch 21, CIFAR-10 Batch 5:  Loss: 0.1495 Validation Accuracy: 0.642600
Epoch 22, CIFAR-10 Batch 1:  Loss: 0.1165 Validation Accuracy: 0.645200
Epoch 22, CIFAR-10 Batch 2:  Loss: 0.1137 Validation Accuracy: 0.644200
Epoch 22, CIFAR-10 Batch 3:  Loss: 0.1348 Validation Accuracy: 0.647200
Epoch 22, CIFAR-10 Batch 4:  Loss: 0.0872 Validation Accuracy: 0.635800
Epoch 22, CIFAR-10 Batch 5:  Loss: 0.0994 Validation Accuracy: 0.646800
Epoch 23, CIFAR-10 Batch 1:  Loss: 0.1093 Validation Accuracy: 0.642000
Epoch 23, CIFAR-10 Batch 2:  Loss: 0.0813 Validation Accuracy: 0.650400
Epoch 23, CIFAR-10 Batch 3:  Loss: 0.1254 Validation Accuracy: 0.643600
Epoch 23, CIFAR-10 Batch 4:  Loss: 0.0827 Validation Accuracy: 0.638800
Epoch 23, CIFAR-10 Batch 5:  Loss: 0.0964 Validation Accuracy: 0.646000
Epoch 24, CIFAR-10 Batch 1:  Loss: 0.0701 Validation Accuracy: 0.650400
Epoch 24, CIFAR-10 Batch 2:  Loss: 0.0831 Validation Accuracy: 0.643000
Epoch 24, CIFAR-10 Batch 3:  Loss: 0.1029 Validation Accuracy: 0.652400
Epoch 24, CIFAR-10 Batch 4:  Loss: 0.0649 Validation Accuracy: 0.648400
Epoch 24, CIFAR-10 Batch 5:  Loss: 0.0758 Validation Accuracy: 0.655800
Epoch 25, CIFAR-10 Batch 1:  Loss: 0.0578 Validation Accuracy: 0.651200
Epoch 25, CIFAR-10 Batch 2:  Loss: 0.0671 Validation Accuracy: 0.642400
Epoch 25, CIFAR-10 Batch 3:  Loss: 0.1094 Validation Accuracy: 0.652000
Epoch 25, CIFAR-10 Batch 4:  Loss: 0.0558 Validation Accuracy: 0.641200
Epoch 25, CIFAR-10 Batch 5:  Loss: 0.0783 Validation Accuracy: 0.651000
Epoch 26, CIFAR-10 Batch 1:  Loss: 0.0509 Validation Accuracy: 0.644000
Epoch 26, CIFAR-10 Batch 2:  Loss: 0.0505 Validation Accuracy: 0.648000
Epoch 26, CIFAR-10 Batch 3:  Loss: 0.0999 Validation Accuracy: 0.652400
Epoch 26, CIFAR-10 Batch 4:  Loss: 0.0432 Validation Accuracy: 0.634800
Epoch 26, CIFAR-10 Batch 5:  Loss: 0.0687 Validation Accuracy: 0.652000
Epoch 27, CIFAR-10 Batch 1:  Loss: 0.0496 Validation Accuracy: 0.655200
Epoch 27, CIFAR-10 Batch 2:  Loss: 0.0489 Validation Accuracy: 0.642200
Epoch 27, CIFAR-10 Batch 3:  Loss: 0.0806 Validation Accuracy: 0.651600
Epoch 27, CIFAR-10 Batch 4:  Loss: 0.0435 Validation Accuracy: 0.638200
Epoch 27, CIFAR-10 Batch 5:  Loss: 0.0447 Validation Accuracy: 0.652000
Epoch 28, CIFAR-10 Batch 1:  Loss: 0.0346 Validation Accuracy: 0.657000
Epoch 28, CIFAR-10 Batch 2:  Loss: 0.0418 Validation Accuracy: 0.652200
Epoch 28, CIFAR-10 Batch 3:  Loss: 0.0592 Validation Accuracy: 0.648000
Epoch 28, CIFAR-10 Batch 4:  Loss: 0.0267 Validation Accuracy: 0.638800
Epoch 28, CIFAR-10 Batch 5:  Loss: 0.0424 Validation Accuracy: 0.644600
Epoch 29, CIFAR-10 Batch 1:  Loss: 0.0492 Validation Accuracy: 0.651400
Epoch 29, CIFAR-10 Batch 2:  Loss: 0.0445 Validation Accuracy: 0.648800
Epoch 29, CIFAR-10 Batch 3:  Loss: 0.0584 Validation Accuracy: 0.644600
Epoch 29, CIFAR-10 Batch 4:  Loss: 0.0359 Validation Accuracy: 0.639400
Epoch 29, CIFAR-10 Batch 5:  Loss: 0.0346 Validation Accuracy: 0.648800
Epoch 30, CIFAR-10 Batch 1:  Loss: 0.0321 Validation Accuracy: 0.650200
Epoch 30, CIFAR-10 Batch 2:  Loss: 0.0326 Validation Accuracy: 0.648000
Epoch 30, CIFAR-10 Batch 3:  Loss: 0.0493 Validation Accuracy: 0.649600
Epoch 30, CIFAR-10 Batch 4:  Loss: 0.0243 Validation Accuracy: 0.647200
Epoch 30, CIFAR-10 Batch 5:  Loss: 0.0323 Validation Accuracy: 0.649600
Epoch 31, CIFAR-10 Batch 1:  Loss: 0.0224 Validation Accuracy: 0.655000
Epoch 31, CIFAR-10 Batch 2:  Loss: 0.0222 Validation Accuracy: 0.646800
Epoch 31, CIFAR-10 Batch 3:  Loss: 0.0378 Validation Accuracy: 0.638400
Epoch 31, CIFAR-10 Batch 4:  Loss: 0.0260 Validation Accuracy: 0.642800
Epoch 31, CIFAR-10 Batch 5:  Loss: 0.0374 Validation Accuracy: 0.649000
Epoch 32, CIFAR-10 Batch 1:  Loss: 0.0242 Validation Accuracy: 0.651400
Epoch 32, CIFAR-10 Batch 2:  Loss: 0.0290 Validation Accuracy: 0.645600
Epoch 32, CIFAR-10 Batch 3:  Loss: 0.0420 Validation Accuracy: 0.641800
Epoch 32, CIFAR-10 Batch 4:  Loss: 0.0179 Validation Accuracy: 0.639200
Epoch 32, CIFAR-10 Batch 5:  Loss: 0.0367 Validation Accuracy: 0.648800
Epoch 33, CIFAR-10 Batch 1:  Loss: 0.0174 Validation Accuracy: 0.645600
Epoch 33, CIFAR-10 Batch 2:  Loss: 0.0231 Validation Accuracy: 0.649600
Epoch 33, CIFAR-10 Batch 3:  Loss: 0.0323 Validation Accuracy: 0.645200
Epoch 33, CIFAR-10 Batch 4:  Loss: 0.0152 Validation Accuracy: 0.638000
Epoch 33, CIFAR-10 Batch 5:  Loss: 0.0219 Validation Accuracy: 0.646400
Epoch 34, CIFAR-10 Batch 1:  Loss: 0.0199 Validation Accuracy: 0.643800
Epoch 34, CIFAR-10 Batch 2:  Loss: 0.0275 Validation Accuracy: 0.644800
Epoch 34, CIFAR-10 Batch 3:  Loss: 0.0256 Validation Accuracy: 0.646600
Epoch 34, CIFAR-10 Batch 4:  Loss: 0.0158 Validation Accuracy: 0.637200
Epoch 34, CIFAR-10 Batch 5:  Loss: 0.0165 Validation Accuracy: 0.645800
Epoch 35, CIFAR-10 Batch 1:  Loss: 0.0313 Validation Accuracy: 0.648200
Epoch 35, CIFAR-10 Batch 2:  Loss: 0.0139 Validation Accuracy: 0.644000
Epoch 35, CIFAR-10 Batch 3:  Loss: 0.0287 Validation Accuracy: 0.645000
Epoch 35, CIFAR-10 Batch 4:  Loss: 0.0178 Validation Accuracy: 0.637600
Epoch 35, CIFAR-10 Batch 5:  Loss: 0.0133 Validation Accuracy: 0.648400
Epoch 36, CIFAR-10 Batch 1:  Loss: 0.0188 Validation Accuracy: 0.646000
Epoch 36, CIFAR-10 Batch 2:  Loss: 0.0097 Validation Accuracy: 0.647200
Epoch 36, CIFAR-10 Batch 3:  Loss: 0.0183 Validation Accuracy: 0.645800
Epoch 36, CIFAR-10 Batch 4:  Loss: 0.0214 Validation Accuracy: 0.632400
Epoch 36, CIFAR-10 Batch 5:  Loss: 0.0201 Validation Accuracy: 0.637000
Epoch 37, CIFAR-10 Batch 1:  Loss: 0.0195 Validation Accuracy: 0.643000
Epoch 37, CIFAR-10 Batch 2:  Loss: 0.0079 Validation Accuracy: 0.641200
Epoch 37, CIFAR-10 Batch 3:  Loss: 0.0160 Validation Accuracy: 0.639400
Epoch 37, CIFAR-10 Batch 4:  Loss: 0.0110 Validation Accuracy: 0.635600
Epoch 37, CIFAR-10 Batch 5:  Loss: 0.0235 Validation Accuracy: 0.641000
Epoch 38, CIFAR-10 Batch 1:  Loss: 0.0145 Validation Accuracy: 0.644400
Epoch 38, CIFAR-10 Batch 2:  Loss: 0.0088 Validation Accuracy: 0.642600
Epoch 38, CIFAR-10 Batch 3:  Loss: 0.0225 Validation Accuracy: 0.633400
Epoch 38, CIFAR-10 Batch 4:  Loss: 0.0166 Validation Accuracy: 0.635400
Epoch 38, CIFAR-10 Batch 5:  Loss: 0.0168 Validation Accuracy: 0.639800
Epoch 39, CIFAR-10 Batch 1:  Loss: 0.0166 Validation Accuracy: 0.641000
Epoch 39, CIFAR-10 Batch 2:  Loss: 0.0087 Validation Accuracy: 0.641400
Epoch 39, CIFAR-10 Batch 3:  Loss: 0.0163 Validation Accuracy: 0.640200
Epoch 39, CIFAR-10 Batch 4:  Loss: 0.0244 Validation Accuracy: 0.635200
Epoch 39, CIFAR-10 Batch 5:  Loss: 0.0110 Validation Accuracy: 0.643600
Epoch 40, CIFAR-10 Batch 1:  Loss: 0.0123 Validation Accuracy: 0.649800
Epoch 40, CIFAR-10 Batch 2:  Loss: 0.0099 Validation Accuracy: 0.651000
Epoch 40, CIFAR-10 Batch 3:  Loss: 0.0135 Validation Accuracy: 0.641800
Epoch 40, CIFAR-10 Batch 4:  Loss: 0.0074 Validation Accuracy: 0.636200
Epoch 40, CIFAR-10 Batch 5:  Loss: 0.0109 Validation Accuracy: 0.637400
Epoch 41, CIFAR-10 Batch 1:  Loss: 0.0074 Validation Accuracy: 0.644000
Epoch 41, CIFAR-10 Batch 2:  Loss: 0.0054 Validation Accuracy: 0.644400
Epoch 41, CIFAR-10 Batch 3:  Loss: 0.0150 Validation Accuracy: 0.645200
Epoch 41, CIFAR-10 Batch 4:  Loss: 0.0065 Validation Accuracy: 0.632000
Epoch 41, CIFAR-10 Batch 5:  Loss: 0.0134 Validation Accuracy: 0.629200
Epoch 42, CIFAR-10 Batch 1:  Loss: 0.0085 Validation Accuracy: 0.640800
Epoch 42, CIFAR-10 Batch 2:  Loss: 0.0040 Validation Accuracy: 0.646800
Epoch 42, CIFAR-10 Batch 3:  Loss: 0.0085 Validation Accuracy: 0.635200
Epoch 42, CIFAR-10 Batch 4:  Loss: 0.0120 Validation Accuracy: 0.630800
Epoch 42, CIFAR-10 Batch 5:  Loss: 0.0212 Validation Accuracy: 0.646600
Epoch 43, CIFAR-10 Batch 1:  Loss: 0.0077 Validation Accuracy: 0.639400
Epoch 43, CIFAR-10 Batch 2:  Loss: 0.0039 Validation Accuracy: 0.641400
Epoch 43, CIFAR-10 Batch 3:  Loss: 0.0119 Validation Accuracy: 0.642400
Epoch 43, CIFAR-10 Batch 4:  Loss: 0.0063 Validation Accuracy: 0.630600
Epoch 43, CIFAR-10 Batch 5:  Loss: 0.0128 Validation Accuracy: 0.643000
Epoch 44, CIFAR-10 Batch 1:  Loss: 0.0100 Validation Accuracy: 0.643400
Epoch 44, CIFAR-10 Batch 2:  Loss: 0.0096 Validation Accuracy: 0.648600
Epoch 44, CIFAR-10 Batch 3:  Loss: 0.0067 Validation Accuracy: 0.638000
Epoch 44, CIFAR-10 Batch 4:  Loss: 0.0058 Validation Accuracy: 0.638400
Epoch 44, CIFAR-10 Batch 5:  Loss: 0.0133 Validation Accuracy: 0.653600
Epoch 45, CIFAR-10 Batch 1:  Loss: 0.0058 Validation Accuracy: 0.647600
Epoch 45, CIFAR-10 Batch 2:  Loss: 0.0066 Validation Accuracy: 0.642400
Epoch 45, CIFAR-10 Batch 3:  Loss: 0.0087 Validation Accuracy: 0.651600
Epoch 45, CIFAR-10 Batch 4:  Loss: 0.0031 Validation Accuracy: 0.638800
Epoch 45, CIFAR-10 Batch 5:  Loss: 0.0109 Validation Accuracy: 0.653200
Epoch 46, CIFAR-10 Batch 1:  Loss: 0.0046 Validation Accuracy: 0.652000
Epoch 46, CIFAR-10 Batch 2:  Loss: 0.0030 Validation Accuracy: 0.650000
Epoch 46, CIFAR-10 Batch 3:  Loss: 0.0101 Validation Accuracy: 0.650000
Epoch 46, CIFAR-10 Batch 4:  Loss: 0.0052 Validation Accuracy: 0.643400
Epoch 46, CIFAR-10 Batch 5:  Loss: 0.0068 Validation Accuracy: 0.652600
Epoch 47, CIFAR-10 Batch 1:  Loss: 0.0052 Validation Accuracy: 0.631400
Epoch 47, CIFAR-10 Batch 2:  Loss: 0.0048 Validation Accuracy: 0.650600
Epoch 47, CIFAR-10 Batch 3:  Loss: 0.0146 Validation Accuracy: 0.646400
Epoch 47, CIFAR-10 Batch 4:  Loss: 0.0050 Validation Accuracy: 0.637800
Epoch 47, CIFAR-10 Batch 5:  Loss: 0.0048 Validation Accuracy: 0.644000
Epoch 48, CIFAR-10 Batch 1:  Loss: 0.0049 Validation Accuracy: 0.641200
Epoch 48, CIFAR-10 Batch 2:  Loss: 0.0059 Validation Accuracy: 0.644600
Epoch 48, CIFAR-10 Batch 3:  Loss: 0.0098 Validation Accuracy: 0.648600
Epoch 48, CIFAR-10 Batch 4:  Loss: 0.0030 Validation Accuracy: 0.635200
Epoch 48, CIFAR-10 Batch 5:  Loss: 0.0039 Validation Accuracy: 0.648600
Epoch 49, CIFAR-10 Batch 1:  Loss: 0.0075 Validation Accuracy: 0.647800
Epoch 49, CIFAR-10 Batch 2:  Loss: 0.0033 Validation Accuracy: 0.652600
Epoch 49, CIFAR-10 Batch 3:  Loss: 0.0077 Validation Accuracy: 0.646800
Epoch 49, CIFAR-10 Batch 4:  Loss: 0.0065 Validation Accuracy: 0.634400
Epoch 49, CIFAR-10 Batch 5:  Loss: 0.0042 Validation Accuracy: 0.649800
Epoch 50, CIFAR-10 Batch 1:  Loss: 0.0029 Validation Accuracy: 0.642200
Epoch 50, CIFAR-10 Batch 2:  Loss: 0.0047 Validation Accuracy: 0.648200
Epoch 50, CIFAR-10 Batch 3:  Loss: 0.0065 Validation Accuracy: 0.649600
Epoch 50, CIFAR-10 Batch 4:  Loss: 0.0028 Validation Accuracy: 0.647200
Epoch 50, CIFAR-10 Batch 5:  Loss: 0.0029 Validation Accuracy: 0.643400

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.


In [33]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()


Testing Accuracy: 0.6308346518987342

Why 50-70% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 70%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.


In [ ]: